Predicting the bankruptcy risk of small and medium-sized enterprises (SMEs) is an important step for financial institutions when making lending decisions. However, existing studies in both the finance and AI research fields tend to consider only enterprise intra-risk or contagion risk, ignoring their interaction and combined effect. This study is the first to consider both risks and their joint impact in bankruptcy prediction. Specifically, we first propose an enterprise intra-risk encoder based on statistically significant enterprise risk indicators for intra-risk learning. We then propose an enterprise contagion-risk encoder based on the enterprise relation information in an enterprise knowledge graph for contagion-risk embedding. In particular, the contagion-risk encoder includes both a newly proposed hypergraph neural network and a heterogeneous graph neural network, which model contagion risk from two different aspects, i.e., common risk factors based on hyperedges and risk that diffuses directly along enterprise relations. To evaluate the model, we collect real-world multi-source data on SMEs and build a new benchmark dataset named SMESD. We provide open access to the dataset, which is expected to further promote research on financial risk analysis. Experiments on SMESD against twelve state-of-the-art baselines demonstrate the effectiveness of the proposed model for bankruptcy prediction.
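As an illustration of the hyperedge-based view of contagion, here is a minimal PyTorch sketch of one hypergraph convolution layer, in which SMEs linked by a common risk factor (one hyperedge) exchange information through that hyperedge. It follows the standard incidence-matrix formulation and is not claimed to be the paper's exact encoder.

```python
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """One hypergraph convolution layer: nodes sharing a hyperedge
    (e.g., SMEs exposed to a common risk factor) exchange information
    through it, following X' = Dv^-1 H De^-1 H^T X W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, X, H):
        # X: (n_nodes, in_dim) node features; H: (n_nodes, n_edges) incidence
        De = H.sum(dim=0).clamp(min=1)                # hyperedge degrees
        Dv = H.sum(dim=1).clamp(min=1)                # node degrees
        edge_msg = (H.t() @ X) / De.unsqueeze(1)      # pool nodes into hyperedges
        node_msg = (H @ edge_msg) / Dv.unsqueeze(1)   # scatter back to nodes
        return torch.relu(self.lin(node_msg))
```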
Stock Movement Prediction (SMP) aims to predict the future price trends of listed companies, which is a challenging task due to the volatile nature of financial markets. Recent financial studies show that the momentum spillover effect plays a significant role in stock fluctuation. However, previous studies typically learn only simple connection information among related companies, which inevitably fails to model the complex relations of listed companies in real financial markets. To address this issue, we first construct a more comprehensive Market Knowledge Graph (MKG), which contains bi-typed entities, including listed companies and their associated executives, and hybrid relations, including both explicit and implicit relations. We then propose DanSmp, a novel dual attention network, to learn momentum spillover signals from the constructed MKG for stock prediction. Empirical experiments on our constructed dataset against nine SOTA baselines demonstrate that the proposed DanSmp is able to improve stock prediction with the constructed MKG.
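A hedged sketch of how momentum spillover can be aggregated from a graph with multiple relation types: neighbour-level attention pools each relation's neighbours, and relation-level attention pools the per-relation summaries. The module below is a generic two-level attention illustration, not the published DanSmp architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpilloverAttention(nn.Module):
    """Attend over a company's neighbours per relation, then pool the
    per-relation summaries, so signals from strongly related companies
    dominate the spillover representation."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)   # neighbour-level attention
        self.rel_score = nn.Linear(dim, 1)   # relation-level attention

    def forward(self, h_self, neigh_by_rel):
        # h_self: (dim,); neigh_by_rel: list of (n_r, dim) tensors, one per relation
        rel_msgs = []
        for neigh in neigh_by_rel:
            pair = torch.cat([h_self.expand_as(neigh), neigh], dim=-1)
            a = F.softmax(self.score(pair).squeeze(-1), dim=0)
            rel_msgs.append((a.unsqueeze(-1) * neigh).sum(dim=0))
        rel_msgs = torch.stack(rel_msgs)      # (R, dim)
        b = F.softmax(self.rel_score(rel_msgs).squeeze(-1), dim=0)
        return (b.unsqueeze(-1) * rel_msgs).sum(dim=0)
```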
Bi-typed heterogeneous graphs appear in many real-world scenarios. However, previous heterogeneous graph learning studies usually ignore the complex interactions between the two types of entities in such graphs. To address this issue, in this paper we propose a novel Dual Hierarchical Attention Network (DHAN) to learn comprehensive node representations on bi-typed heterogeneous graphs using intra-class and inter-class hierarchical attention networks. Specifically, the intra-class attention learns node representations from neighbors of the same type, while the inter-class attention aggregates node representations from neighbors of different types. The dual attention operations thus enable DHAN to fully exploit not only a node's intra-class neighboring information but also its inter-class neighboring information in bi-typed heterogeneous graphs. Experimental results on various tasks against state-of-the-art methods fully confirm DHAN's capability in learning comprehensive node representations.
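The intra-class/inter-class split can be made concrete with a small sketch: one attention pool over same-type neighbours, one over cross-type neighbours, and a fusion layer. This is a simplified illustration of the dual-attention idea, assuming pre-computed neighbour embeddings; it is not DHAN's full hierarchy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attn_pool(query, keys, score):
    """Softmax-attention pooling of `keys` (n, d) w.r.t. `query` (d,)."""
    logits = score(torch.cat([query.expand_as(keys), keys], dim=-1)).squeeze(-1)
    w = F.softmax(logits, dim=0)
    return (w.unsqueeze(-1) * keys).sum(dim=0)

class DualTypeAggregator(nn.Module):
    """Aggregate same-type and cross-type neighbours separately,
    then fuse the two views into one node representation."""
    def __init__(self, dim):
        super().__init__()
        self.intra_score = nn.Linear(2 * dim, 1)
        self.inter_score = nn.Linear(2 * dim, 1)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, h, same_type_neigh, cross_type_neigh):
        h_intra = attn_pool(h, same_type_neigh, self.intra_score)
        h_inter = attn_pool(h, cross_type_neigh, self.inter_score)
        return torch.relu(self.fuse(torch.cat([h_intra, h_inter], dim=-1)))
```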
Bike-sharing systems (BSSs) have become increasingly popular worldwide and have attracted broad research interest. This paper studies the demand prediction problem in BSSs. Spatial and temporal features are crucial for demand prediction in BSSs, but extracting the spatiotemporal dynamics of demand is challenging. A further challenge is to capture the relations between spatiotemporal dynamics and external factors, such as weather, day of the week, and time of day. To address these challenges, we propose a multi-spatiotemporal fusion network named MSTF-Net. MSTF-Net consists of multiple spatiotemporal blocks: a 3D convolutional network (3D-CNN) block, an Eidetic 3D convolutional long short-term memory network (E3D-LSTM) block, and a fully-connected (FC) block. Specifically, the 3D-CNN block extracts short-term spatiotemporal dependencies within each fragment (i.e., closeness, period, and trend); the E3D-LSTM block further extracts long-term spatiotemporal dependencies over all fragments; and the FC block extracts nonlinear correlations of external factors. Finally, the latent representations of the E3D-LSTM and FC blocks are fused to obtain the final prediction. On two real-world datasets, MSTF-Net is shown to outperform seven state-of-the-art models.
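How the fused prediction might be formed is sketched below: the spatiotemporal latent from the E3D-LSTM path is combined with an MLP embedding of the external factors and passed to a regression head. The additive fusion and layer sizes are illustrative assumptions; the paper's exact fusion operator is not reproduced here.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Fuse the spatiotemporal latent (from the 3D-CNN/E3D-LSTM path)
    with the external-factor latent (from the FC path) and regress demand."""
    def __init__(self, st_dim, ext_in, hidden, out_dim):
        super().__init__()
        self.ext_mlp = nn.Sequential(nn.Linear(ext_in, hidden), nn.ReLU(),
                                     nn.Linear(hidden, st_dim))
        self.head = nn.Linear(st_dim, out_dim)

    def forward(self, z_st, x_ext):
        # z_st: (B, st_dim) spatiotemporal latent; x_ext: (B, ext_in) weather/day features
        z = z_st + self.ext_mlp(x_ext)   # simple additive fusion
        return self.head(z)
```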
Unlike traditional distributed machine learning, federated learning stores data locally for training and then aggregates the models on the server, which addresses the data security problems that can arise in traditional distributed machine learning. However, during training, the transmission of model parameters can impose a significant load on the network bandwidth, and it has been pointed out that the vast majority of transmitted model parameters are redundant. Building on this observation, we study the distribution of a selected subset of model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load imposed by data transmission through hierarchical quantization of the model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate the convergence of the model. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
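A minimal sketch of the generic sparsify-then-quantize idea behind such compression schemes: keep only the largest-magnitude entries of a model update, then uniformly quantize them to a few bits. The function below is an illustration under those assumptions, not the paper's hierarchical quantization algorithm.

```python
import torch

def quantize_topk(update: torch.Tensor, k_ratio: float = 0.1, bits: int = 4):
    """Sparsify-then-quantize compression of one model update: keep the
    largest-magnitude entries, then uniformly quantize them to `bits` bits
    within their own value range."""
    flat = update.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    idx = flat.abs().topk(k).indices              # indices of informative entries
    vals = flat[idx]
    lo, hi = vals.min(), vals.max()
    levels = 2 ** bits - 1
    q = torch.round((vals - lo) / (hi - lo + 1e-12) * levels)  # integer codes
    deq = q / levels * (hi - lo) + lo             # what the server reconstructs
    return idx, q.to(torch.uint8), (lo, hi), deq
```

A client would transmit only `idx`, the integer codes, and the `(lo, hi)` range, rather than the full dense update.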
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis. Although many unsupervised Change-Point Detection (CPD) methods have been proposed recently to identify such changes, they still suffer from missing subtle changes, poor scalability, and/or sensitivity to noise points. To meet these challenges, we are the first to generalise the CPD problem as a special case of the Change-Interval Detection (CID) problem. We then propose a CID method, named iCID, based on a recent Isolation Distributional Kernel (IDK). iCID identifies a change interval when there is a high dissimilarity score between two non-homogeneous temporally adjacent intervals. The data-dependent property and finite feature map of IDK enable iCID to efficiently identify various types of change points in data streams while tolerating noise points. Moreover, the proposed online and offline versions of iCID are able to optimise key parameter settings. The effectiveness and efficiency of iCID have been systematically verified on both synthetic and real-world datasets.
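The interval-scoring step can be illustrated with an MMD-style score between the two windows adjacent to a candidate point; here a plain RBF kernel stands in for the Isolation Distributional Kernel, so this is a simplified sketch rather than iCID itself.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel matrix; a simple stand-in for the Isolation
    Distributional Kernel used in the paper."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def interval_dissimilarity(stream, t, w, gamma=1.0):
    """Score how different the two length-w windows adjacent to time t are:
    high (MMD^2-style) when the distributions before and after t disagree."""
    A, B = stream[t - w:t], stream[t:t + w]
    return (rbf(A, A, gamma).mean() + rbf(B, B, gamma).mean()
            - 2 * rbf(A, B, gamma).mean())

# Flag a change interval when the score exceeds a threshold, e.g.:
# scores = [interval_dissimilarity(X, t, 50) for t in range(50, len(X) - 50)]
```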
Graphic layout designs play an essential role in visual communication. Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production. Although generative models have emerged to make design automation no longer utopian, it remains non-trivial to customize designs that comply with designers' multimodal desires, i.e., constrained by background images and driven by foreground contents. In this study, we propose LayoutDETR, which inherits the high quality and realism of generative modeling while reformulating content-aware requirements as a detection problem: we learn to detect, in a background image, the reasonable locations, scales, and spatial relations for the multimodal elements in a layout. Experiments validate that our solution yields new state-of-the-art performance for layout generation on public benchmarks and on our newly curated ads banner dataset. For practical usage, we build our solution into a graphical system that facilitates user studies. We demonstrate that our designs attract more subjective preference than baselines by significant margins. Our code, models, dataset, graphical system, and demos are available at https://github.com/salesforce/LayoutDETR.
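One way to read the detection reformulation: learned element queries cross-attend to background-image features, and each query decodes into a normalized box. The sketch below is a generic DETR-style head with assumed dimensions, not the released LayoutDETR model.

```python
import torch
import torch.nn as nn

class LayoutQueryHead(nn.Module):
    """Detection-style layout head: learned element queries attend to
    background-image features; each query is decoded into a normalized
    layout box (cx, cy, w, h)."""
    def __init__(self, d_model=256, n_queries=8, n_layers=4):
        super().__init__()
        self.queries = nn.Embedding(n_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.box_head = nn.Linear(d_model, 4)

    def forward(self, img_feats):
        # img_feats: (B, HW, d_model) flattened background features
        q = self.queries.weight.unsqueeze(0).expand(img_feats.size(0), -1, -1)
        h = self.decoder(q, img_feats)
        return self.box_head(h).sigmoid()   # (B, n_queries, 4) boxes in [0, 1]
```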
Semantic segmentation of UAV aerial remote sensing images provides a more efficient and convenient alternative to traditional surveying and mapping. To keep the model lightweight while improving accuracy, this research develops a new lightweight and efficient network for extracting ground features from UAV aerial remote sensing images, called LDMCNet. This research also develops a powerful lightweight backbone network for the proposed semantic segmentation model, called LDCNet, which we hope can become the backbone of a new generation of lightweight semantic segmentation algorithms. The proposed model uses two multi-scale context modules, namely the Atrous Spatial Pyramid Pooling (ASPP) module and the Object Context Representation (OCR) module. In addition, this research constructs a private dataset for semantic segmentation of aerial remote sensing images from drones, containing 2431 training images, 945 validation images, and 475 test images. The proposed model performs well on this dataset, with only 1.4M parameters and 5.48G floating-point operations (FLOPs), achieving a mean intersection-over-union (mIoU) of 71.12%, which is 7.88% higher than the baseline model. To verify the effectiveness of the proposed model, training on the public datasets "LoveDA" and "CITY-OSM" also achieved excellent results, with mIoU of 65.27% and 74.39%, respectively.
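For reference, here is a compact version of the ASPP idea, simplified by omitting the usual 1x1 and image-pooling branches: parallel dilated convolutions capture context at several receptive-field sizes, and a 1x1 convolution fuses the concatenated branches.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling (simplified): parallel dilated
    3x3 convolutions at several rates, fused by a 1x1 projection."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.project(torch.cat(feats, dim=1))
```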
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks. Cross Entropy (CE) loss has been shown to be not robust to noisy labels due to its unboundedness. To alleviate this issue, existing works typically design specialized robust losses with the symmetric condition, which usually lead to the underfitting issue. In this paper, our key idea is to induce a loss bound at the logit level, thus universally enhancing the noise robustness of existing losses. Specifically, we propose logit clipping (LogitClip), which clamps the norm of the logit vector to ensure that it is upper bounded by a constant. In this manner, CE loss equipped with our LogitClip method is effectively bounded, mitigating the overfitting to examples with noisy labels. Moreover, we present theoretical analyses to certify the noise-tolerant ability of LogitClip. Extensive experiments show that LogitClip not only significantly improves the noise robustness of CE loss, but also broadly enhances the generalization performance of popular robust losses.
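The core operation is small enough to state directly. Below is a faithful-in-spirit sketch of logit norm clamping before cross-entropy; the threshold name `tau` is ours, and hyperparameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def logit_clip_ce(logits, targets, tau=1.0):
    """Clamp the L2 norm of each logit vector to at most tau before the
    cross-entropy, so the loss value stays bounded even on mislabeled examples."""
    norms = logits.norm(p=2, dim=-1, keepdim=True)
    clipped = logits * (tau / norms).clamp(max=1.0)   # shrink only if norm > tau
    return F.cross_entropy(clipped, targets)
```

Because the rescaling preserves each logit vector's direction, predicted class rankings are unchanged; only the loss magnitude on high-norm (often overconfident) examples is capped.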
We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and their distribution differs considerably from that of conditional modalities, such as textual descriptors in natural languages, it is hard to learn a probabilistic mapping from the desired conditional modality to the human motion sequences. Besides, raw motion data from motion capture systems may be redundant in sequences and contain noise; directly modeling the joint distribution over raw motion sequences and conditional modalities would incur heavy computational overhead and might introduce artifacts from the captured noise. To learn a better representation of the various human motion sequences, we first design a powerful Variational AutoEncoder (VAE) and arrive at a representative, low-dimensional latent code for a human motion sequence. Then, instead of using a diffusion model to establish connections between the raw motion sequences and the conditional inputs, we perform the diffusion process in the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) produces vivid motion sequences conforming to the given conditional inputs and substantially reduces computational overhead in both the training and inference stages. Extensive experiments on various human motion generation tasks demonstrate that our MLD achieves significant improvements over state-of-the-art methods, while being two orders of magnitude faster than previous diffusion models that operate on raw motion sequences.
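The latent-space diffusion step can be sketched as follows, assuming a frozen VAE encoder `vae.encode`, a conditional denoiser `denoiser(z_t, t, cond)`, and a toy cosine noise schedule; all three interfaces are illustrative assumptions rather than MLD's actual components.

```python
import torch

def latent_diffusion_loss(vae, denoiser, motion, cond, T=1000):
    """One training step of diffusion in the motion latent space: encode
    the sequence with a frozen VAE, noise the latent at a random timestep,
    and train the denoiser to predict that noise."""
    with torch.no_grad():
        z = vae.encode(motion)                        # (B, d) motion latent
    t = torch.randint(0, T, (z.size(0),), device=z.device)
    alpha_bar = torch.cos(t.float() / T * torch.pi / 2) ** 2  # toy schedule
    eps = torch.randn_like(z)
    z_t = (alpha_bar.sqrt().unsqueeze(-1) * z
           + (1 - alpha_bar).sqrt().unsqueeze(-1) * eps)
    return ((denoiser(z_t, t, cond) - eps) ** 2).mean()
```

Working in the compact latent space rather than on raw pose sequences is what drives the reported speedup: the denoiser operates on a low-dimensional code instead of a long, high-dimensional motion tensor.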